Results 1 - 20 of 27
1.
Article in English | MEDLINE | ID: mdl-37260834

ABSTRACT

Recently, deep learning networks have achieved considerable success in segmenting organs in medical images. Several methods have used volumetric information with deep networks to achieve segmentation accuracy. However, for very challenging objects such as the brachial plexuses, these networks suffer from interference, a risk of overfitting, and low accuracy resulting from artifacts. In this paper, to address these issues, we synergize the strengths of high-level human knowledge (i.e., natural intelligence (NI)) with deep learning (i.e., artificial intelligence (AI)) for recognition and delineation of the thoracic brachial plexuses (BPs) in computed tomography (CT) images. We formulate an anatomy-guided deep learning hybrid intelligence approach for segmenting the thoracic right and left brachial plexuses consisting of two key stages. In the first stage (AAR-R), objects are recognized based on a previously created fuzzy anatomy model of the body region with its key organs relevant for the task at hand, wherein high-level human anatomic knowledge is precisely codified. The second stage (DL-D) uses information from AAR-R to limit the search region to just where each object is most likely to reside and performs encoder-decoder delineation in slices. The proposed method is tested on a dataset of 125 thoracic images acquired for radiation therapy planning of tumors in the thorax and achieves a Dice coefficient of 0.659.
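The Dice coefficient reported above can be computed directly from binary masks; below is a minimal NumPy sketch (array names are illustrative, not from the paper).

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# hypothetical call: dice_coefficient(dl_d_mask, ground_truth_mask)
```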

2.
Res Sq ; 2023 Jan 18.
Article in English | MEDLINE | ID: mdl-36711962

ABSTRACT

Purpose: Tissue radiotracer activity measured from positron emission tomography (PET) images is an important biomarker that is clinically utilized for diagnosis, staging, prognostication, and treatment response assessment in patients with cancer and other clinical disorders. Using PET image values to define a normal range of metabolic activity for quantification purposes is challenging due to variations in patient-related factors and technical factors. Although the formulation of the standardized uptake value (SUV) has compensated for some of these variabilities, significant non-standardness still persists. We propose an image processing method to substantially mitigate these variabilities. Methods: The standardization method is similar for activity concentration (AC) PET and SUV PET images, with some differences, and consists of two steps. The calibration step is performed only once for each of AC PET or SUV PET, employs a set of images of normal subjects, and requires a reference object, while the transformation step is executed for each patient image to be standardized. In the calibration step, a standardized scale is determined along with three key image intensity landmarks defined on it: the minimum percentile intensity s_min, median intensity s_m, and high percentile intensity s_max. s_min and s_m are estimated based on image intensities within the body region in the normal calibration image set. The optimal value of the maximum percentile β corresponding to the intensity s_max is estimated via an optimization process that uses the reference object to optimally separate the highly variable high uptake values from the normal uptake intensities. In the transformation step, the first two landmarks - the minimum percentile intensity p_α(I) and the median intensity p_m(I) - are found for the given image I over the body region, and the high percentile intensity p_β(I) is determined corresponding to the optimally estimated high percentile value β. Subsequently, intensities of I are mapped to the standard scale piecewise linearly for the different segments. We employ three strategies for evaluation and comparison with other standardization methods: (i) comparing the coefficient of variation (CV_O) of mean intensity within test objects O across different normal test subjects before and after standardization; (ii) comparing the mean absolute difference (MD_O) of mean intensity within test objects O across different subjects in repeat scans before and after standardization; (iii) comparing CV_O of mean intensity across different normal subjects before and after standardization where the scans came from different brands of scanners. Results: Our data set consisted of 84 FDG-PET/CT scans of the body torso, including 38 normal subjects and two repeat scans each of 23 patients. We utilized one of two objects - liver and spleen - as the reference object and the other for testing. The proposed standardization method reduced CV_O and MD_O by a factor of 3-8 in comparison to other standardization methods and to no standardization. Upon standardization by our method, the image intensities (both AC and SUV) from two different brands of scanners become statistically indistinguishable, while without standardization they differ significantly, by a factor of 3-9. Conclusions: The proposed method is automatic, outperforms current standardization methods, and effectively overcomes the residual variation left over in SUV as well as inter-scanner variations.
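A minimal sketch of the transformation step as described (piecewise linear mapping of landmarks to the standard scale), assuming the calibration step has already yielded s_min, s_m, s_max and the optimal percentile β; the two-segment mapping and names are illustrative.

```python
import numpy as np

def standardize(image, body_mask, alpha, beta, s_min, s_m, s_max):
    """Map intensities of `image` to the standard scale piecewise linearly."""
    vals = image[body_mask > 0]
    p_alpha = np.percentile(vals, alpha)   # minimum percentile intensity p_alpha(I)
    p_m = np.percentile(vals, 50)          # median intensity p_m(I)
    p_beta = np.percentile(vals, beta)     # high percentile intensity p_beta(I)
    out = np.empty_like(image, dtype=np.float64)
    lo = image <= p_m
    # segment 1: [p_alpha(I), p_m(I)] -> [s_min, s_m]
    out[lo] = s_min + (image[lo] - p_alpha) * (s_m - s_min) / (p_m - p_alpha)
    # segment 2: [p_m(I), p_beta(I)] -> [s_m, s_max]
    out[~lo] = s_m + (image[~lo] - p_m) * (s_max - s_m) / (p_beta - p_m)
    return out
```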

3.
Med Phys ; 49(11): 7118-7149, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35833287

ABSTRACT

BACKGROUND: Automatic segmentation of 3D objects in computed tomography (CT) is challenging. Current methods, based mainly on artificial intelligence (AI) and end-to-end deep learning (DL) networks, are weak in garnering high-level anatomic information, which leads to compromised efficiency and robustness. This can be overcome by incorporating natural intelligence (NI) into AI methods via computational models of human anatomic knowledge. PURPOSE: We formulate a hybrid intelligence (HI) approach that integrates the complementary strengths of NI and AI for organ segmentation in CT images and illustrate its performance in the application of radiation therapy (RT) planning via multisite clinical evaluation. METHODS: The system employs five modules: (i) body region recognition, which automatically trims a given image to a precisely defined target body region; (ii) NI-based automatic anatomy recognition object recognition (AAR-R), which performs object recognition in the trimmed image without DL and outputs a localized fuzzy model for each object; (iii) DL-based recognition (DL-R), which refines the coarse recognition results of AAR-R and outputs a stack of 2D bounding boxes (BBs) for each object; (iv) model morphing (MM), which deforms the AAR-R fuzzy model of each object guided by the BBs output by DL-R; and (v) DL-based delineation (DL-D), which employs the object containment information provided by MM to delineate each object. NI from (ii), AI from (i), (iii), and (v), and their combination from (iv) facilitate the HI system. RESULTS: The HI system was tested on 26 organs in the neck and thorax body regions on CT images obtained prospectively from 464 patients in a study involving four RT centers. Data sets from one separate independent institution involving 125 patients were employed in training/model building for each of the two body regions, whereas 104 and 110 data sets from the four RT centers were utilized for testing on neck and thorax, respectively. In the testing data sets, 83% of the images had limitations such as streak artifacts, poor contrast, shape distortion, pathology, or implants. The contours output by the HI system were compared to contours drawn in clinical practice at the four RT centers by utilizing an independently established ground-truth set of contours as reference. Three sets of measures were employed: accuracy via Dice coefficient (DC) and Hausdorff boundary distance (HD), subjective clinical acceptability via a blinded reader study, and efficiency via the human time saved in contouring by the HI system. Overall, the HI system achieved a mean DC of 0.78 and 0.87 and a mean HD of 2.22 and 4.53 mm for neck and thorax, respectively. It significantly outperformed clinical contouring in accuracy and saved overall 70% of the human time required by clinical contouring, whereas acceptability scores varied significantly from site to site for both auto-contours and clinically drawn contours. CONCLUSIONS: In the contouring task, the HI system is observed to match an expert human in robustness while being vastly more efficient. It appears to draw on NI where image information alone does not suffice, first for correct localization of the object and then for precise delineation of its boundary.
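The HD accuracy measure above can be sketched as follows, assuming binary masks and voxel spacing in mm; for brevity all foreground voxels are used as the point sets rather than extracted boundary voxels.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance between two binary masks, in mm."""
    pts_a = np.argwhere(mask_a) * np.asarray(spacing)  # foreground voxel coordinates
    pts_b = np.argwhere(mask_b) * np.asarray(spacing)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])
```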


Subjects
Artificial Intelligence; Humans; Cone-Beam Computed Tomography
4.
Med Image Anal ; 81: 102527, 2022 10.
Article in English | MEDLINE | ID: mdl-35830745

ABSTRACT

PURPOSE: Despite advances in deep learning, robust medical image segmentation in the presence of artifacts, pathology, and other imaging shortcomings has remained a challenge. In this paper, we demonstrate that by synergistically marrying the unmatched strengths of high-level human knowledge (i.e., natural intelligence (NI)) with the capabilities of deep learning (DL) networks (i.e., artificial intelligence (AI)) in garnering intricate details, these challenges can be significantly overcome. Focusing on the object recognition task, we formulate an anatomy-guided deep learning object recognition approach named AAR-DL, which combines an advanced anatomy-modeling strategy, model-based non-deep-learning object recognition, and deep learning object detection networks to achieve expert-human-like performance. METHODS: The AAR-DL approach consists of four key modules wherein prior knowledge (NI) is made use of judiciously at every stage. In the first module, AAR-R, objects are recognized based on a previously created fuzzy anatomy model of the body region with all its organs, following the automatic anatomy recognition (AAR) approach wherein high-level human anatomic knowledge is precisely codified. This module is purely model-based with no DL involvement. Although the AAR-R operation lacks accuracy, it is robust to artifacts and deviations (much like NI), and provides the much-needed anatomic guidance in the form of rough regions-of-interest (ROIs) for the following DL modules. The second module, DL-R, makes use of the ROI information to limit the search region to just where each object is most likely to reside and performs DL-based detection of the 2D bounding boxes (BBs) in slices. The 2D BBs hug the shape of the 3D object much better than 3D BBs, and their detection is feasible only due to anatomy guidance from AAR-R. In the third module, the AAR model is deformed via the found 2D BBs, providing refined model information which now embodies both NI and AI decisions. The refined AAR model more actively guides the fourth module, refined DL-R, to perform final object detection via DL. Anatomy knowledge is made use of in designing the DL networks, wherein spatially sparse objects and non-sparse objects are handled differently to provide the required level of attention for each. RESULTS: Utilizing 150 thoracic and 225 head and neck (H&N) computed tomography (CT) data sets of cancer patients undergoing routine radiation therapy planning, the recognition performance of the AAR-DL approach is evaluated on 10 thoracic and 16 H&N organs in comparison to the pure model-based approach (AAR-R) and a pure DL approach without anatomy guidance. Recognition accuracy is assessed via location error/centroid distance error, scale or size error, and wall distance error. The results demonstrate how the errors are gradually and systematically reduced from the first module to the fourth as high-level knowledge is infused via NI at various stages into the processing pipeline. This improvement is especially dramatic for sparse and artifact-prone challenging objects, achieving a location error over all objects of 4.4 mm and 4.3 mm for the two body regions, respectively. The pure DL approach failed on several very challenging sparse objects while AAR-DL achieved accurate recognition, almost matching human performance, showing the importance of anatomy guidance for robust operation. Anatomy guidance also reduces the time required for training DL networks considerably.
CONCLUSIONS: (i) High-level anatomy guidance improves recognition performance of DL methods. (ii) This improvement is especially noteworthy for spatially sparse, low-contrast, inconspicuous, and artifact-prone objects. (iii) Once anatomy guidance is provided, 3D objects can be detected much more accurately via 2D BBs than 3D BBs and the 2D BBs represent object containment with much more specificity. (iv) Anatomy guidance brings stability and robustness to DL approaches for object localization. (v) The training time can be greatly reduced by making use of anatomy guidance.
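Conclusion (iii) - that per-slice 2D BBs represent containment more specifically than one 3D BB - can be illustrated on a binary mask; a sketch with illustrative names:

```python
import numpy as np

def bbs_2d(mask):
    """One tight 2D box per slice (axis 0 = slice direction)."""
    boxes = {}
    for z in range(mask.shape[0]):
        ys, xs = np.nonzero(mask[z])
        if ys.size:
            boxes[z] = (ys.min(), ys.max(), xs.min(), xs.max())
    return boxes

def bb_3d(mask):
    """A single 3D box enclosing the whole object."""
    zs, ys, xs = np.nonzero(mask)
    return (zs.min(), zs.max(), ys.min(), ys.max(), xs.min(), xs.max())

# For a curved, spatially sparse object, the union of per-slice boxes covers far
# fewer background voxels than the single 3D box, tightening the DL search region.
```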


Subjects
Deep Learning; Image Processing, Computer-Assisted; Algorithms; Artificial Intelligence; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods
6.
Article in English | MEDLINE | ID: mdl-34887608

ABSTRACT

Multi-atlas segmentation methods benefit from atlases covering the complete spectrum of population patterns, but the difficulty of generating sufficiently large atlas sets and the computational burden of the segmentation procedure reduce their practicality in clinical application. In this work, we start from the viewpoint that different parts of the target object can be recognized by different atlases and propose a precision atlas selection strategy. By comparing regional similarity between the target image and the atlases, precision atlases are ranked and selected by their frequency of regional best match; these atlases need not be globally similar to the target subject at either the image level or the object level, which greatly increases the implicit patterns contained in the atlas set. In the proposed anatomy recognition method, atlas building is first achieved by all-to-template registration, where the minimum spanning tree (MST) strategy is used to select a registration template from a subset of radiologically near-normal images. Then, a two-stage recognition process is conducted: in rough recognition, sub-image-level similarity is calculated between the test image and each image of the whole atlas set, and only the atlas with the highest similarity contributes to the recognition map regionally; in refined recognition, the atlases with the highest frequencies of best match are selected as the precision atlases and are utilized to further increase the accuracy of boundary matching. The proposed method is demonstrated on 298 computed tomography (CT) images and 9 organs in the head and neck (H&N) body region. Experimental results illustrate that our method is effective for organs posing different segmentation challenges and for samples of differing image quality; refined recognition markedly improves boundary interpretation, and most objects achieve a localization error within 2 voxels.
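A sketch of the frequency-of-regional-best-match ranking described above, assuming images already aligned to the template space and some regional similarity function (both assumptions):

```python
import numpy as np

def rank_atlases(target, atlases, windows, similarity):
    """Rank atlases by how often each wins the regional best match.
    windows: iterable of tuples of slices defining subimage regions;
    similarity: callable(subimage_a, subimage_b) -> float (higher is better)."""
    counts = np.zeros(len(atlases), dtype=int)
    for win in windows:
        sims = [similarity(target[win], atlas[win]) for atlas in atlases]
        counts[int(np.argmax(sims))] += 1   # only the regionally best atlas scores
    return np.argsort(counts)[::-1]         # indices of precision atlases first
```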

7.
Med Phys ; 48(12): 7806-7825, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34668207

ABSTRACT

PURPOSE: In the multi-atlas segmentation (MAS) method, an atlas set large enough to cover the complete spectrum of population patterns of the target object benefits segmentation quality. However, the difficulty of obtaining and generating such a large set of atlases and the computational burden required in the segmentation procedure make this approach impractical. In this paper, we propose a method called SOMA to select subject-, object-, and modality-adapted precision atlases for automatic anatomy recognition in medical images with pathology, following the idea that different regions of the target object in a novel image can be recognized by different atlases with regionally best similarity, so that effective atlases need be neither globally similar to the target subject nor overall similar to the target object. METHODS: The SOMA method consists of three main components: atlas building, object recognition, and object delineation. Considering the computational complexity, we utilize an all-to-template strategy to align all images to the same image space, belonging to the root image determined by the minimum spanning tree (MST) strategy among a subset of radiologically near-normal images. The object recognition process is composed of two stages: rough recognition and refined recognition. In rough recognition, subimage matching is conducted between the test image and each image of the whole atlas set, and only the atlas corresponding to the best-matched subimage contributes to the recognition map regionally. The frequency of best match for each atlas is recorded by a counter, and the atlases with the highest frequencies are selected as the precision atlases. In refined recognition, only the precision atlases are examined, and the subimage matching is conducted in a nonlocal manner of searching to further increase the accuracy of boundary matching. Delineation is based on a U-net-based deep learning network, where the original gray-scale image and the fuzzy map from refined recognition form a two-channel input to the network, and the output is a segmentation map of the target object. RESULTS: Experiments were conducted on computed tomography (CT) images of differing quality in two body regions - head and neck (H&N) and thorax - from 298 subjects with nine objects and 241 subjects with six objects, respectively. Most objects achieve a localization error within two voxels after refined recognition, with marked improvement in localization accuracy from rough to refined recognition (0.6-3 mm in H&N and 0.8-4.9 mm in thorax) and in delineation accuracy (Dice coefficient) from refined recognition to delineation (0.01-0.11 in H&N and 0.01-0.18 in thorax). CONCLUSIONS: The SOMA method shows high accuracy and robustness in anatomy recognition and delineation. The improvements from rough to refined recognition and further to delineation, as well as the immunity of recognition accuracy to varying image and object quality, demonstrate the core principles of SOMA, whereby segmentation accuracy increases with precision atlases and gradually refined object matching.
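The MST-based choice of the registration template can be sketched with SciPy, given a pairwise image dissimilarity matrix; the root criterion used here (the most-connected MST node) is an assumption, not necessarily the paper's rule.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def select_template(D: np.ndarray) -> int:
    """Pick a registration root from a pairwise dissimilarity matrix D
    (symmetric, positive off-diagonal entries, zero diagonal)."""
    mst = minimum_spanning_tree(D).toarray()
    adj = (mst + mst.T) > 0          # undirected MST adjacency
    degrees = adj.sum(axis=1)
    return int(np.argmax(degrees))   # assumed criterion: most-connected image
```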


Subjects
Thorax; Tomography, X-Ray Computed; Algorithms; Humans; Image Processing, Computer-Assisted
8.
Med Phys ; 47(8): 3467-3484, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32418221

ABSTRACT

PURPOSE: The derivation of quantitative information from medical images in a practical manner is essential for quantitative radiology (QR) to become a clinical reality, but it still faces a major hurdle because of image segmentation challenges. With the goal of performing disease quantification in lymph node (LN) stations without explicit nodal delineation, this paper presents a novel approach for disease quantification (DQ) by automatic recognition of LN zones and detection of malignant lymph nodes within thoracic LN zones via positron emission tomography/computed tomography (PET/CT) images. Named AAR-LN-DQ, this approach decouples DQ methods from explicit nodal segmentation via an LN recognition strategy involving a novel globular filter and a deep neural network called SegNet. METHOD: The methodology consists of four main steps: (a) Building lymph node zone models by the automatic anatomy recognition (AAR) method. This incorporates novel aspects of model building that relate to finding an optimal hierarchy for organs and lymph node zones in the thorax. (b) Recognizing lymph node zones using the built lymph node zone models. (c) Detecting pathologic LNs in the recognized zones by using a novel globular filter (g-filter) and a multi-level support vector machine (SVM) classifier. Here, we make use of the generally globular shape of LNs to first localize them and then use a multi-level SVM classifier to identify pathologic LNs from among the LNs localized by the g-filter. Alternatively, we designed a deep neural network called SegNet which is trained to directly recognize pathologic nodes within AAR-localized LN zones. (d) Disease quantification based on identified pathologic LNs within localized zones. A fuzzy disease map is devised to express the degree of disease burden at each voxel within the identified LNs so as to simultaneously handle several uncertain phenomena such as PET partial volume effects, uncertainty in localization of LNs, and gradation of disease content at the voxel level. We focused on the task of disease quantification in patients with lymphoma based on PET/CT acquisitions and devised a method of evaluation. Model building was carried out using 42 near-normal patient data sets via contrast-enhanced CT examinations of the thorax. PET/CT data sets from an additional 63 lymphoma patients were utilized for evaluating the AAR-LN-DQ methodology. We assess the accuracy of the three main processes involved in AAR-LN-DQ via fivefold cross-validation: lymph node zone recognition, abnormal lymph node localization, and disease quantification. RESULTS: The recognition and scale error for LN zones were 12.28 mm ± 1.99 and 0.94 ± 0.02, respectively, on normal CT data sets. On abnormal PET/CT data sets, the sensitivity and specificity of pathologic LN recognition were 84.1% ± 0.115 and 98.5% ± 0.003, respectively, for the g-filter-SVM strategy, and 91.3% ± 0.110 and 96.1% ± 0.016, respectively, for the SegNet method. Finally, the mean absolute percent errors for disease quantification of the recognized abnormal LNs were 8% ± 0.09 and 14% ± 0.10 for the g-filter-SVM method and the best SegNet strategy, respectively. CONCLUSIONS: Accurate disease quantification on PET/CT images without performing explicit delineation of lymph nodes is feasible following lymph node zone and pathologic LN localization. It is very useful to perform LN zone recognition by AAR, as this step can cover most (95.8%) of the abnormal LNs and drastically reduce the regions to search for abnormal LNs.
This also significantly improves the specificity of deep networks such as SegNet. It is possible to utilize general shape information about LNs, such as their globular nature, via the g-filter and to arrive at high recognition rates for abnormal LNs in conjunction with a traditional classifier such as SVM. Finally, the disease map concept is effective for estimating disease burden, irrespective of how the LNs are identified, handling various uncertainties without having to address them explicitly one by one.
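The g-filter itself is specific to this work; as a loose analogue, scale-space Laplacian-of-Gaussian blob detection can localize globular candidates, followed by an SVM on per-candidate features (a sketch under these substitutions, not the paper's method):

```python
import numpy as np
from skimage.feature import blob_log
from sklearn.svm import SVC

def detect_candidates(pet_slice):
    """LoG blob detection on a normalized float PET slice.
    Returns rows of (row, col, sigma); blob radius ~ sigma * sqrt(2)."""
    return blob_log(pet_slice, min_sigma=2, max_sigma=10, num_sigma=9,
                    threshold=0.1)

def classify_candidates(train_feats, train_labels, test_feats):
    """SVM separating pathologic from non-pathologic candidates
    (per-candidate features, e.g. SUV statistics, are hypothetical)."""
    clf = SVC(kernel="rbf").fit(train_feats, train_labels)
    return clf.predict(test_feats)
```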


Subjects
Fluorodeoxyglucose F18; Positron Emission Tomography Computed Tomography; Humans; Lymph Nodes/diagnostic imaging; Neoplasm Staging; Positron-Emission Tomography; Thorax; Tomography, X-Ray Computed
9.
Med Image Anal ; 54: 45-62, 2019 05.
Article in English | MEDLINE | ID: mdl-30831357

ABSTRACT

Contouring (segmentation) of Organs at Risk (OARs) in medical images is required for accurate radiation therapy (RT) planning. In current clinical practice, OAR contouring is performed with low levels of automation. Although several approaches have been proposed in the literature for improving automation, it is difficult to gain an understanding of how well these methods would perform in a realistic clinical setting. This is chiefly due to three key factors - small number of patient studies used for evaluation, lack of performance evaluation as a function of input image quality, and lack of precise anatomic definitions of OARs. In this paper, extending our previous body-wide Automatic Anatomy Recognition (AAR) framework to RT planning of OARs in the head and neck (H&N) and thoracic body regions, we present a methodology called AAR-RT to overcome some of these hurdles. AAR-RT follows AAR's 3-stage paradigm of model-building, object-recognition, and object-delineation. Model-building: Three key advances were made over AAR. (i) AAR-RT (like AAR) starts off with a computationally precise definition of the two body regions and all of their OARs. Ground truth delineations of OARs are then generated following these definitions strictly. We retrospectively gathered patient data sets and the associated contour data sets that have been created previously in routine clinical RT planning from our Radiation Oncology department and mended the contours to conform to these definitions. We then derived an Object Quality Score (OQS) for each OAR sample and an Image Quality Score (IQS) for each study, both on a 1-to-10 scale, based on quality grades assigned to each OAR sample following 9 key quality criteria. Only studies with high IQS and high OQS for all of their OARs were selected for model building. IQS and OQS were employed for evaluating AAR-RT's performance as a function of image/object quality. (ii) In place of the previous hand-crafted hierarchy for organizing OARs in AAR, we devised a method to find an optimal hierarchy for each body region. Optimality was based on minimizing object recognition error. (iii) In addition to the parent-to-child relationship encoded in the hierarchy in previous AAR, we developed a directed probability graph technique to further improve recognition accuracy by learning and encoding in the model "steady" relationships that may exist among OAR boundaries in the three orthogonal planes. Object-recognition: The two key improvements over the previous approach are (i) use of the optimal hierarchy for actual recognition of OARs in a given image, and (ii) refined recognition by making use of the trained probability graph. Object-delineation: We use a kNN classifier confined to the fuzzy object mask localized by the recognition step and then fit optimally the fuzzy mask to the kNN-derived voxel cluster to bring back shape constraint on the object. We evaluated AAR-RT on 205 thoracic and 298 H&N (total 503) studies, involving both planning and re-planning scans and a total of 21 organs (9 - thorax, 12 - H&N). The studies were gathered from two patient age groups for each gender - 40-59 years and 60-79 years. The number of 3D OAR samples analyzed from the two body regions was 4301. IQS and OQS tended to cluster at the two ends of the score scale. Accordingly, we considered two quality groups for each gender - good and poor. Good quality data sets typically had OQS ≥ 6 and had distortions, artifacts, pathology etc. in not more than 3 slices through the object. 
The number of model-worthy data sets used for training was 38 for thorax and 36 for H&N; the remaining 479 studies were used for testing AAR-RT. Accordingly, we created four anatomy models, one each for: thorax male (20 model-worthy data sets), thorax female (18), H&N male (20), and H&N female (16). On "good" cases, AAR-RT's recognition accuracy was within 2 voxels and delineation boundary distance was within ∼1 voxel. This was similar to the variability observed between two dosimetrists in manually contouring 5-6 OARs in each of 169 studies. On "poor" cases, AAR-RT's errors hovered around 5 voxels for recognition and 2 voxels for boundary distance. Performance was similar on planning and replanning cases, and there was no gender difference in performance. AAR-RT's recognition operation is much more robust than delineation. Understanding object and image quality and how they influence performance is crucial for devising effective object recognition and delineation algorithms. OQS seems to be more important than IQS in determining accuracy. Streak artifacts arising from dental implants and fillings and beam hardening from bone pose the greatest challenge to auto-contouring methods.
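The delineation step described (kNN classification confined to the recognition-localized fuzzy mask) might look as follows; the per-voxel features and threshold are assumptions, and the final optimal fitting of the fuzzy mask to the voxel cluster is omitted.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def delineate(voxel_feats, fuzzy_mask, train_feats, train_labels, k=5, tau=0.5):
    """kNN object/background labeling restricted to the localized fuzzy mask.
    voxel_feats: array of shape fuzzy_mask.shape + (n_features,)."""
    clf = KNeighborsClassifier(n_neighbors=k).fit(train_feats, train_labels)
    inside = fuzzy_mask > tau          # search only where recognition placed the object
    out = np.zeros(fuzzy_mask.shape, dtype=bool)
    out[inside] = clf.predict(voxel_feats[inside]) > 0
    return out
```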


Subjects
Head and Neck Neoplasms/diagnostic imaging; Organs at Risk/diagnostic imaging; Radiotherapy Planning, Computer-Assisted/methods; Thoracic Neoplasms/diagnostic imaging; Tomography, X-Ray Computed; Adult; Aged; Anatomic Landmarks; Female; Head and Neck Neoplasms/radiotherapy; Humans; Male; Middle Aged; Models, Anatomic; Pattern Recognition, Automated; Retrospective Studies; Thoracic Neoplasms/radiotherapy
10.
Med Image Anal ; 51: 169-183, 2019 01.
Article in English | MEDLINE | ID: mdl-30453165

ABSTRACT

PURPOSE: The derivation of quantitative information from images in a clinically practical way continues to face a major hurdle because of image segmentation challenges. This paper presents a novel approach, called automatic anatomy recognition-disease quantification (AAR-DQ), for disease quantification (DQ) on positron emission tomography/computed tomography (PET/CT) images. This approach explores how to decouple DQ methods from explicit dependence on object (e.g., organ) delineation through the use of only object recognition results from our recently developed automatic anatomy recognition (AAR) method to quantify disease burden. METHOD: The AAR-DQ process starts off with the AAR approach for modeling anatomy and automatically recognizing objects on low-dose CT images of PET/CT acquisitions. It incorporates novel aspects of model building that relate to finding an optimal disease map for each organ. The parameters of the disease map are estimated from a set of training image data sets including normal subjects and patients with metastatic cancer. The result of recognition for an object on a patient image is the location of a fuzzy model for the object which is optimally adjusted for the image. The model is used as a fuzzy mask on the PET image for estimating a fuzzy disease map for the specific patient and subsequently for quantifying disease based on this map. This process handles blur arising in PET images from the partial volume effect entirely through accurate fuzzy mapping to account for heterogeneity and gradation of disease content at the voxel level, without explicitly performing correction for the partial volume effect. Disease quantification is performed from the fuzzy disease map in terms of total lesion glycolysis (TLG) and standardized uptake value (SUV) statistics. We also demonstrate that the method of disease quantification is applicable even when the "object" of interest is recognized manually with a simple and quick action such as interactively specifying a 3D box ROI. Depending on the degree of automaticity for object and lesion recognition on PET/CT, DQ can be performed at the object level either semi-automatically (DQ-MO) or automatically (DQ-AO), or at the lesion level either semi-automatically (DQ-ML) or automatically. RESULTS: We utilized 67 data sets in total: 16 normal data sets used for model building, and 20 phantom data sets plus 31 patient data sets (with various types of metastatic cancer) used for testing the three methods DQ-AO, DQ-MO, and DQ-ML. The parameters of the disease map were estimated using the leave-one-out strategy. The organs of focus were the left and right lungs and the liver, and the disease quantities measured were TLG, SUVMean, and SUVMax. On phantom data sets, overall errors for the three parameters were approximately 6%, 3%, and 0%, respectively, with TLG error varying from 2% for large "lesions" (37 mm diameter) to 37% for small "lesions" (10 mm diameter). On patient data sets, for non-conspicuous lesions, those overall errors were approximately 19%, 14%, and 0%; for conspicuous lesions, they were approximately 9%, 7%, and 0%, respectively, with errors in estimation generally smaller for liver than for lungs, although without statistical significance. CONCLUSIONS: Accurate disease quantification on PET/CT images without performing explicit delineation of lesions is feasible following object recognition. Method DQ-MO generally yields more accurate results than DQ-AO, although the difference is not statistically significant.
Compared to current methods from the literature, almost all of which focus only on lesion-level DQ and not organ-level DQ, our results were comparable for large lesions and were superior for smaller lesions, with less demand on training data and computational resources. DQ-AO and even DQ-MO seem to have the potential for quantifying disease burden body-wide routinely via the AAR-DQ approach.
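Given a fuzzy disease map, the reported quantities can be computed as below; the weighted forms are assumptions consistent with the stated definitions, not the paper's exact estimator.

```python
import numpy as np

def quantify(suv, disease_map, voxel_ml):
    """DQ measures from an SUV image and a fuzzy disease map in [0, 1].
    voxel_ml: voxel volume in milliliters."""
    w = disease_map
    tlg = float((suv * w).sum() * voxel_ml)                # total lesion glycolysis
    suv_mean = float((suv * w).sum() / max(w.sum(), 1e-9)) # fuzzy-weighted mean SUV
    suv_max = float(suv[w > 0].max()) if (w > 0).any() else 0.0
    return tlg, suv_mean, suv_max
```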


Subjects
Diagnosis, Computer-Assisted/methods; Image Enhancement/methods; Neoplasms/diagnostic imaging; Pattern Recognition, Automated/methods; Positron Emission Tomography Computed Tomography; Algorithms; Fuzzy Logic; Humans
11.
Article in English | MEDLINE | ID: mdl-30190627

ABSTRACT

Many papers have been published on the detection and segmentation of lymph nodes in medical images, yet this remains a challenging problem owing to low contrast with surrounding soft tissues and to variations of lymph node size and shape on computed tomography (CT) images. It is particularly difficult on the low-dose CT of PET/CT acquisitions. In this study, we utilize our previous automatic anatomy recognition (AAR) framework to recognize the thoracic lymph node stations defined by the International Association for the Study of Lung Cancer (IASLC) lymph node map. The lymph node stations themselves are viewed as anatomic objects and are localized by using a one-shot method within the AAR framework. Two strategies are taken in this paper for integration into the AAR framework. The first is to combine some lymph node stations into composite lymph node stations according to their geometric nearness. The other is to find the optimal parent (organ or union of organs) to serve as an anchor for each lymph node station, based on recognition error, and thereby find an overall optimal hierarchy arranging anchor organs and lymph node stations. Based on 28 contrast-enhanced thoracic CT image data sets for model building and 12 independent data sets for testing, our results show that thoracic lymph node stations can be localized to within 2-3 voxels of the ground truth.

12.
Article in English | MEDLINE | ID: mdl-30190628

ABSTRACT

The recently developed body-wide Automatic Anatomy Recognition (AAR) methodology depends on fuzzy modeling of individual objects, hierarchically arranging objects, constructing an anatomy ensemble of these models, and a dichotomous object recognition-delineation process. The parent-to-offspring spatial relationship in the object hierarchy is crucial in the AAR method. We have found this relationship to be quite complex, and as such any improvement in capturing this relationship information in the anatomy model will improve the process of recognition itself. Currently, the method encodes this relationship based on the layout of the geometric centers of the objects. Motivated by the concept of virtual landmarks (VLs), this paper presents a new one-shot AAR recognition method that utilizes the VLs to learn object relationships by training a neural network to predict the pose and the VLs of an offspring object given the VLs of the parent object in the hierarchy. We set up two neural networks for each parent-offspring object pair in a body region, one for predicting the VLs and another for predicting the pose parameters. The VL-based learning/prediction method is evaluated on two object hierarchies involving 14 objects. We utilize 54 computed tomography (CT) image data sets of head and neck cancer patients and the associated object contours drawn by dosimetrists for routine radiation therapy treatment planning. The VL neural network method is found to yield more accurate object localization than the currently used simple AAR method.
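A minimal sketch of the parent-to-offspring VL prediction with a small regressor; network size and training settings are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_vl_predictor(parent_vls: np.ndarray, offspring_vls: np.ndarray):
    """One regressor per parent-offspring pair, mapping the parent's virtual
    landmarks to the offspring's. Inputs: (n_subjects, n_landmarks * 3) arrays
    of flattened (x, y, z) landmark coordinates."""
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    return net.fit(parent_vls, offspring_vls)

# at recognition time: offspring_hat = net.predict(parent_vls_of_test_subject)
# a second, analogous regressor would predict the pose parameters
```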

13.
Article in English | MEDLINE | ID: mdl-30190629

ABSTRACT

Contouring of the organs at risk is a vital part of routine radiation therapy planning. For the head and neck (H&N) region, this is more challenging due to the complexity of the anatomy, the presence of streak artifacts, and variations in object appearance. In this paper, we describe the latest advances in our Automatic Anatomy Recognition (AAR) approach, which aims to automatically contour multiple objects in the head and neck region on planning CT images. Our method has three major steps: model building, object recognition, and object delineation. First, the better-quality images from our cohort of H&N CT studies are used to build fuzzy models and to find the optimal hierarchy for arranging objects based on inter-object relationships. Then, the object recognition step exploits the rich prior anatomic information encoded in the hierarchy to derive the location and pose of each object, which leads to generalizable and robust methods and mitigates object localization challenges. Finally, the delineation algorithms employ local features to contour the boundary based on the object recognition results. We make several improvements within the AAR framework, including finding a recognition-error-driven optimal hierarchy, modeling boundary relationships, combining texture and intensity, and evaluating object quality. Experiments were conducted on the largest ensemble of clinical data sets reported to date, including 216 planning CT studies and over 2,600 object samples. The preliminary results show that on data sets with minimal (<4 slices) streak artifacts and other deviations, overall recognition accuracy reaches 2 voxels, with overall delineation Dice coefficient close to 0.8 and Hausdorff distance within 1 voxel.

14.
Article in English | MEDLINE | ID: mdl-30190630

ABSTRACT

Segmentation of organs at risk (OARs) is a key step in the radiation therapy (RT) treatment planning process. Automatic anatomy recognition (AAR) is a recently developed body-wide multiple-object segmentation approach, in which segmentation is designed as two dichotomous steps: object recognition (or localization) and object delineation. Recognition is the high-level process of determining the whereabouts of an object, and delineation is the meticulous low-level process of precisely indicating the space occupied by an object. This study focuses on recognition. The purpose of this paper is to introduce new features of the AAR recognition approach (abbreviated as AAR-R from now on): combining texture and intensity information in the recognition procedure, using an optimal spanning tree to achieve the hierarchy that minimizes recognition errors, and illustrating recognition performance on large-scale testing computed tomography (CT) data sets. The data sets pertain to 216 non-serial (planning) and 82 serial (re-planning) studies of head and neck (H&N) cancer patients undergoing radiation therapy, involving a total of ~2600 object samples. The texture property "maximum probability of occurrence" derived from the co-occurrence matrix was determined to be the best property and is utilized in conjunction with intensity properties in AAR-R. An optimal spanning tree is found in the complete graph whose nodes are individual objects, and this tree is then used as the hierarchy in recognition. Texture information combined with intensity can significantly reduce the location error for gland-related objects (parotid and submandibular glands). We also report recognition results by considering image quality, which is a novel concept. AAR-R with the new features achieves a location error of less than 4 mm (~1.5 voxels in our studies) on good-quality images for both serial and non-serial studies.
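The "maximum probability of occurrence" texture property can be computed from a normalized gray-level co-occurrence matrix; a sketch with illustrative distance/angle choices:

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_max_probability(patch_u8: np.ndarray) -> float:
    """Maximum probability of occurrence from a normalized GLCM.
    patch_u8: 2D uint8 (8-bit quantized) CT subimage."""
    glcm = graycomatrix(patch_u8, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return float(glcm.max())   # the single most probable gray-level co-occurrence
```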

15.
PLoS One ; 12(1): e0168932, 2017.
Article in English | MEDLINE | ID: mdl-28046024

ABSTRACT

PURPOSE: Overweight and underweight conditions are considered relative contraindications to lung transplantation due to their association with excess mortality. Yet, recent work suggests that body mass index (BMI) does not accurately reflect adipose tissue mass in adults with advanced lung diseases. Alternative and more accurate measures of adiposity are needed. Chest fat estimation by routine computed tomography (CT) imaging may therefore be important for identifying high-risk lung transplant candidates. In this paper, an approach to chest fat quantification and quality assessment based on a recently formulated concept of standardized anatomic space (SAS) is presented. The goal of the paper is to seek answers to several key questions related to chest fat quantity and quality assessment based on a single CT slice (whether in the chest, abdomen, or thigh) versus a volumetric CT, questions which have not been addressed in the literature. METHODS: Unenhanced chest CT image data sets from 40 adult lung transplant candidates (age 58 ± 12 years, BMI 26.4 ± 4.3 kg/m²), 16 with chronic obstructive pulmonary disease (COPD), 16 with idiopathic pulmonary fibrosis (IPF), and the remainder with other conditions, were analyzed together with a single slice acquired for each patient at the L5 vertebral level and at the mid-thigh level. The thoracic body region and the interface between subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) in the chest were consistently defined in all patients and delineated using Live Wire tools. The SAT and VAT components of the chest were then segmented guided by this interface. The SAS approach was used to identify the corresponding anatomic slices in each chest CT study, and SAT and VAT areas in each slice, as well as their whole volumes, were quantified. Similarly, the SAT and VAT components were segmented in the abdomen and thigh slices. Key parameters of the attenuation (Hounsfield unit, HU) distributions were determined from each chest slice and from the whole chest volume, separately for the SAT and VAT components. The same parameters were also computed from the single abdominal and thigh slices. The ability of the slice at each anatomic location in the chest (and abdomen and thigh) to act as a marker of the measures derived from the whole chest volume was assessed via Pearson correlation coefficient (PCC) analysis. RESULTS: The SAS approach correctly identified slice locations in different subjects in terms of vertebral levels. PCC between chest fat volume and chest slice fat area was maximal at the T8 level for SAT (0.97) and at the T7 level for VAT (0.86), and was modest between chest fat volume and abdominal slice fat area for SAT and VAT (0.73 and 0.75, respectively). However, correlation was weak between chest fat volume and thigh slice fat area for SAT and VAT (0.52 and 0.37, respectively), and between chest fat volume and BMI for SAT and VAT (0.65 and 0.28, respectively). The same single-slice locations with maximal PCC were found for SAT and VAT within both the COPD and IPF groups. Most of the attenuation properties derived from the whole chest volume and the single best chest slice for VAT (but not for SAT) were significantly different between the COPD and IPF groups. CONCLUSIONS: This study demonstrates a new way of optimally selecting slices whose measurements may be used as markers of similar measurements made on the whole chest volume. The results suggest that one or two slices imaged at the T7 and T8 vertebral levels may be enough to reliably estimate the total SAT and VAT components of chest fat, as well as the quality of chest fat as determined by attenuation distributions in the entire chest volume.
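The slice-selection analysis reduces to computing PCC per candidate vertebral level and keeping the maximum; a sketch with illustrative variable names:

```python
from scipy.stats import pearsonr

def best_level(slice_areas_by_level: dict, chest_volumes):
    """Find the vertebral level whose single-slice fat area best tracks
    whole-chest fat volume. slice_areas_by_level: {"T7": [...], "T8": [...], ...},
    lists aligned by subject with chest_volumes."""
    r = {lvl: pearsonr(areas, chest_volumes)[0]
         for lvl, areas in slice_areas_by_level.items()}
    return max(r, key=r.get), r   # e.g., ("T8", {"T7": 0.93, "T8": 0.97, ...})
```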


Subjects
Adipose Tissue/diagnostic imaging; Lung Transplantation; Lung/anatomy & histology; Thorax/diagnostic imaging; Adiposity; Adult; Aged; Body Mass Index; Female; Humans; Idiopathic Pulmonary Fibrosis/diagnostic imaging; Idiopathic Pulmonary Fibrosis/surgery; Image Processing, Computer-Assisted; Male; Middle Aged; Nonlinear Dynamics; Observer Variation; Pulmonary Disease, Chronic Obstructive/diagnostic imaging; Pulmonary Disease, Chronic Obstructive/surgery; Tomography, X-Ray Computed
16.
Article in English | MEDLINE | ID: mdl-30158739

ABSTRACT

Much has been published on finding landmarks on object surfaces in the context of shape modeling. While this is still an open problem, many of the challenges of past approaches can be overcome by removing the restriction that landmarks must lie on the object surface. The virtual landmarks we propose may reside inside, on the boundary of, or outside the object and are tethered to the object. Our solution is straightforward, simple, and recursive in nature, proceeding from global features initially to local features at later levels to detect landmarks. Principal component analysis (PCA) is used as an engine to recursively subdivide the object region. The object itself may be represented in binary or fuzzy form or with gray values. The method is illustrated in 3D space (although it generalizes readily to spaces of any dimensionality) on four objects (liver, trachea and bronchi, and the outer boundaries of the left and right lungs along the pleura) derived from 5 patient computed tomography (CT) image data sets of the thorax and abdomen. The virtual landmark identification approach seems to work well on different structures in different subjects and seems to detect landmarks that are homologously located in different samples of the same object. The approach guarantees that virtual landmarks are invariant to translation, scaling, and rotation of the object/image. Landmarking techniques are fundamental for many computer vision and image processing applications, and we are currently exploring the use of virtual landmarks in automatic anatomy recognition and object analytics.
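A minimal sketch of recursive PCA-based virtual landmarks on a binary object; the landmark choice (region centroid) and the splitting rule are assumptions consistent with the description.

```python
import numpy as np

def virtual_landmarks(points, depth=2):
    """points: (n, 3) voxel coordinates of the object, e.g. np.argwhere(mask)."""
    landmarks = []
    def recurse(pts, d):
        if len(pts) == 0:
            return
        c = pts.mean(axis=0)
        landmarks.append(c)                # landmark may lie off the object surface
        if d == 0 or len(pts) < 2:
            return
        axis = np.linalg.svd(pts - c, full_matrices=False)[2][0]  # 1st principal axis
        proj = (pts - c) @ axis
        recurse(pts[proj <= 0], d - 1)     # split the region and descend:
        recurse(pts[proj > 0], d - 1)      # global features first, then local
    recurse(points.astype(float), depth)
    return np.array(landmarks)
```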

17.
Article in English | MEDLINE | ID: mdl-30220769

ABSTRACT

Lung delineation via dynamic 4D thoracic magnetic resonance imaging (MRI) is necessary for quantitative image analysis in studying pediatric respiratory diseases such as thoracic insufficiency syndrome (TIS). This task is very challenging because of the often-extreme malformations of the thorax in TIS, the lack of signal from bone and connective tissues resulting in inadequate image quality, abnormal thoracic dynamics, and the inability of the patients to cooperate with the protocol needed to obtain good-quality images. We propose an interactive fuzzy connectedness approach as a potential practical solution to this difficult problem. Manual segmentation is too labor intensive, especially given the 4D nature of the data, and can lead to low repeatability of the segmentation results. Registration-based approaches are somewhat inefficient and may produce inaccurate results due to accumulated registration errors and inadequate boundary information. The proposed approach works in a manner resembling the Iterative Livewire tool but uses iterative relative fuzzy connectedness (IRFC) as the delineation engine. Seeds needed by IRFC are set manually and are propagated from slice to slice, decreasing the needed human labor, and a fuzzy connectedness map is then calculated almost instantaneously. If the segmentation is acceptable, the user selects the "next" slice; otherwise, the seeds are refined and the process continues. Although human interaction is needed, advantages of the method are the high level of efficient user control over the process and the absence of any need to correct the results afterwards. Dynamic MRI sequences from 5 pediatric TIS patients involving 39 3D spatial volumes are used to evaluate the proposed approach. The method is compared to two other IRFC strategies with a higher level of automation. The proposed method yields an overall true positive and false positive volume fraction of 0.91 and 0.03, respectively, and a Hausdorff boundary distance of 2 mm.

18.
Med Phys ; 43(5): 2323, 2016 May.
Article in English | MEDLINE | ID: mdl-27147344

ABSTRACT

PURPOSE: There are several disease conditions that lead to upper airway restrictive disorders. In the study of these conditions, it is important to take into account the dynamic nature of the upper airway. Currently, dynamic magnetic resonance imaging is the modality of choice for studying these diseases. Unfortunately, the contrast resolution obtainable in the images poses many challenges for an effective segmentation of the upper airway structures. No viable methods have been developed to date to solve this problem. In this paper, the authors demonstrate a practical solution by employing an iterative relative fuzzy connectedness delineation algorithm as a tool. METHODS: 3D dynamic images were collected at ten equally spaced instances over the respiratory cycle (i.e., 4D) in 20 female subjects with obstructive sleep apnea syndrome. The proposed segmentation approach consists of the following steps. First, image background nonuniformities are corrected, which is then followed by a process to correct for the nonstandardness of MR image intensities. Next, standardized image intensity statistics are gathered for the nasopharynx and oropharynx portions of the upper airway as well as for the surrounding soft-tissue structures, including air outside the body region, hard palate, soft palate, tongue, and other soft structures around the airway including the tonsils (left and right) and adenoid. The affinity functions needed for fuzzy connectedness computation are derived based on these tissue intensity statistics. In the next step, seeds for fuzzy connectedness computation are specified for the airway and the background tissue components. Seed specification is needed only in the 3D image corresponding to the first time instance of the 4D volume; from this information, the 3D volume corresponding to the first time point is segmented. Seeds are automatically generated for the next time point from the segmentation of the 3D volume corresponding to the previous time point; the process then continues without human interaction and completes the segmentation of the airway structure in the whole 4D volume in 10 s. RESULTS: Qualitative evaluations performed to examine smoothness and continuity of motion of the entire upper airway, as well as of its transverse sections at critical anatomic locations, indicate that the segmentations are consistent. Quantitative evaluations of the separate 200 3D volumes and the 20 4D volumes yielded true positive and false positive volume fractions around 95% and 0.1%, respectively, and mean boundary placement errors under 0.5 mm. The method is robust to variations in the subjective action of seed specification. Compared with a segmentation approach based on a registration technique to propagate segmentations, the proposed method is more efficient, more accurate, and less prone to error propagation from one respiratory time point to the next. CONCLUSIONS: The proposed method is the first demonstration of a viable and practical approach for segmenting the upper airway structures in dynamic MR images. Compared to registration-based methods, it effectively reduces error propagation and consequently achieves not only more accurate segmentations but also more consistent motion representation in the segmentations. The method is practical, requiring minimal user interaction and computational time.
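An intensity-based affinity of the kind derived from the gathered tissue statistics might look as follows; the Gaussian homogeneity/object-feature split follows the usual fuzzy connectedness formulation, but the parameter choices and combination rule here are assumptions.

```python
import numpy as np

def affinity(f_a: float, f_b: float, m: float, s: float) -> float:
    """Affinity between neighboring voxels with intensities f_a, f_b, given the
    tissue's standardized intensity mean m and standard deviation s."""
    homogeneity = np.exp(-((f_a - f_b) ** 2) / (2 * s ** 2))        # local smoothness
    object_feature = np.exp(-((0.5 * (f_a + f_b) - m) ** 2) / (2 * s ** 2))
    return float(min(homogeneity, object_feature))                  # conservative combination
```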


Subjects
Fuzzy Logic; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Respiratory System/diagnostic imaging; Adolescent; Female; Humans; Motion; Pattern Recognition, Automated/methods; Polycystic Ovary Syndrome/diagnostic imaging; Polycystic Ovary Syndrome/physiopathology; Reproducibility of Results; Respiration; Respiratory System/physiopathology; Sleep Apnea, Obstructive/diagnostic imaging; Sleep Apnea, Obstructive/physiopathology; Time Factors
19.
Med Phys ; 43(3): 1487-500, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26936732

ABSTRACT

PURPOSE: In an attempt to overcome several hurdles that exist in organ segmentation approaches, the authors previously described a general automatic anatomy recognition (AAR) methodology for segmenting all major organs in multiple body regions body-wide [J. K. Udupa et al., "Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images," Med. Image Anal. 18(5), 752-771 (2014)]. That approach utilized fuzzy modeling strategies and a hierarchical organization of organs, and divided the segmentation task into a recognition step to localize organs, followed by a delineation step to demarcate the boundary of organs. It achieved speed and accuracy without employing image/object registration, which is commonly utilized in many reported methods, particularly atlas-based ones. In this paper, our aim is to study how registration may influence performance of the AAR approach. By tightly coupling the recognition and delineation steps, by performing registration in the hierarchical order of the organs, and through several object-specific refinements, the authors demonstrate that improved accuracy for recognition and delineation can be achieved by judicious use of image/object registration. METHODS: The presented approach consists of three processes: model building, hierarchical recognition, and delineation. Labeled binary images for each organ are registered and aligned into a 3D fuzzy set representing the fuzzy shape model for the organ. The hierarchical relation and mean location relation between different organs are captured in the model. The gray intensity distributions of the corresponding regions of the organ in the original image are also recorded in the model. Following the hierarchical structure and location relation, the fuzzy shape model of each organ is registered to the given target image to achieve object recognition. A fuzzy connectedness delineation method is then employed to obtain the final segmentation result for the organs, with seed points provided by recognition. The authors assess the performance of this method for both nonsparse (compact blob-like) and sparse (thin tubular) objects in the thorax. RESULTS: Results for eight thoracic organs on 30 real images are presented. Overall, the delineation accuracy in terms of mean false positive and false negative volume fractions is 0.34% and 4.02%, respectively, for nonsparse objects, and 0.16% and 12.6%, respectively, for sparse objects. The two object groups achieve a mean boundary distance relative to ground truth of 1.31 and 2.28 mm, respectively. CONCLUSIONS: The hierarchical structure and location relation integrated into the model provide the initial pose for registration and make the recognition process efficient and robust. The 3D fuzzy model combined with hierarchical affine registration ensures that accurate recognition can be obtained for both nonsparse and sparse organs. Tailoring the registration process to each organ by specialized similarity criteria and updating the organ intensity properties based on refined recognition improve the overall segmentation process.


Subjects
Algorithms; Fuzzy Logic; Image Processing, Computer-Assisted/methods; Radiography, Thoracic; Thorax/anatomy & histology; Tomography, X-Ray Computed; Automation; Humans; Pattern Recognition, Automated
20.
Med Phys ; 43(1): 613, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26745953

ABSTRACT

PURPOSE: Whole-body positron emission tomography/computed tomography (PET/CT) has become a standard method of imaging patients with various disease conditions, especially cancer. Body-wide accurate quantification of disease burden in PET/CT images is important for characterizing lesions, staging disease, prognosticating patient outcome, planning treatment, and evaluating disease response to therapeutic interventions. Body-wide anatomy recognition in PET/CT is a critical first step for accurately and automatically quantifying disease body-wide, body-region-wise, and organ-wise. This latter process, however, has remained a challenge due to the lower quality of the anatomic information portrayed in the CT component of this imaging modality and the paucity of anatomic detail in the PET component. In this paper, the authors demonstrate the adaptation of a recently developed automatic anatomy recognition (AAR) methodology [Udupa et al., "Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images," Med. Image Anal. 18, 752-771 (2014)] to PET/CT images. Their goal was to test what level of object localization accuracy can be achieved on PET/CT compared to that achieved on diagnostic CT images. METHODS: The authors advance the AAR approach in this work on three fronts: (i) from body-region-wise treatment in the work of Udupa et al. to the whole body; (ii) from the use of image intensity in optimal object recognition in the work of Udupa et al. to intensity plus object-specific texture properties; and (iii) from the intramodality model-building-recognition strategy to an intermodality approach. The whole-body approach allows consideration of relationships among objects in different body regions, which was previously not possible. Consideration of object texture allows generalizing the previous optimal threshold-based fuzzy model recognition method from intensity images to any derived fuzzy membership image, and in the process brings performance to the level achieved on diagnostic CT and MR images in body-region-wise approaches. The intermodality approach fosters the use of already existing fuzzy models, previously created from diagnostic CT images, on PET/CT and other derived images, thus truly separating the modality-independent object assembly anatomy from the modality-specific tissue property portrayal in the image. RESULTS: Key ways of combining the above three basic ideas lead to 15 different strategies for recognizing objects in PET/CT images. Utilizing 50 diagnostic CT image data sets from the thoracic and abdominal body regions and 16 whole-body PET/CT image data sets, the authors compare the recognition performance among these 15 strategies on 18 objects from the thorax, abdomen, and pelvis in terms of object localization error and size estimation error. Particularly on texture membership images, object localization is within three voxels of the known true locations on whole-body low-dose CT images and within two voxels on body-region-wise low-dose images. Surprisingly, even on direct body-region-wise PET images, localization error within three voxels seems possible. CONCLUSIONS: The previous body-region-wise approach can be extended to the whole-body torso with similar object localization performance. Combined use of image texture and intensity properties yields the best object localization accuracy. In both body-region-wise and whole-body approaches, recognition performance on low-dose CT images reaches levels previously achieved on diagnostic CT images. The best object recognition strategy varies among objects; the proposed framework, however, allows employing a strategy that is optimal for each object.


Subjects
Fuzzy Logic; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography; Tomography, X-Ray Computed; Whole Body Imaging; Abdomen/anatomy & histology; Abdomen/diagnostic imaging; Adult; Aged; Automation; Humans; Male; Middle Aged; Radiography, Abdominal; Torso/anatomy & histology; Torso/diagnostic imaging